LogDet Divergence based Metric Learning using Triplet Labels

Authors

  • Jiangyuan Mei
  • Meizhu Liu
  • Hamid Reza Karimi
  • Huijun Gao
Abstract

Metric learning is fundamental to many learning algorithms and plays a significant role in a wide range of applications. In this paper, we present a LogDet divergence based metric learning approach to learn a Mahalanobis distance over the input space of the instances. In the proposed model, triplets, the most natural form of constraint, are used as the labels of the training samples. Meanwhile, to avoid overfitting, the model uses the LogDet divergence to regularize the learned Mahalanobis matrix so that it stays as close as possible to a given prior matrix. In addition, a cyclic iterative algorithm is presented to solve the objective function and accelerate the metric learning process. Furthermore, this paper constructs a novel dynamic triplet-building strategy to guarantee that the most useful triplets are used in every training cycle. Experiments on benchmark data sets demonstrate that the proposed model achieves improved performance compared with state-of-the-art methods.
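As a concrete illustration of the objective described above, here is a minimal sketch. It is not the authors' cyclic iterative algorithm: it combines a hinge loss over triplets with a LogDet regularizer toward a prior matrix A0 and takes plain projected-gradient steps. All names and step sizes (logdet_div, grad_step, eta, lam, margin) are my own assumptions, not notation from the paper.

```python
# Hedged sketch: LogDet-regularized Mahalanobis learning from triplets.
import numpy as np

def logdet_div(A, A0):
    """Burg (LogDet) matrix divergence D_ld(A, A0) = tr(A A0^-1) - log det(A A0^-1) - d."""
    M = A @ np.linalg.inv(A0)
    _, logdet = np.linalg.slogdet(M)
    return np.trace(M) - logdet - A.shape[0]

def mahalanobis(A, x, y):
    d = x - y
    return d @ A @ d

def grad_step(A, A0, X, triplets, margin=1.0, lam=0.1, eta=0.01):
    """One projected-gradient step on
       sum_t hinge(margin + d_A(i,j) - d_A(i,k)) + lam * D_ld(A, A0)."""
    G = lam * (np.linalg.inv(A0) - np.linalg.inv(A))  # gradient of the LogDet term
    for i, j, k in triplets:                          # (anchor, same class, different class)
        u, v = X[i] - X[j], X[i] - X[k]
        if margin + u @ A @ u - v @ A @ v > 0:        # triplet is active (violated)
            G += np.outer(u, u) - np.outer(v, v)      # hinge-loss subgradient
    A = A - eta * G
    w, V = np.linalg.eigh(A)                          # project back onto the PSD cone
    return (V * np.maximum(w, 1e-8)) @ V.T
```

The paper's cyclic algorithm would instead sweep over triplets one at a time; this batch gradient version only mirrors the objective, not the optimization scheme.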


Similar articles

Learning Discriminative αβ-Divergences for Positive Definite Matrices

Symmetric positive definite (SPD) matrices are useful for capturing second-order statistics of visual data. To compare two SPD matrices, several measures are available, such as the affine-invariant Riemannian metric, Jeffreys divergence, Jensen-Bregman logdet divergence, etc.; however, their behaviors may be application dependent, raising the need of manual selection to achieve the best possibl...

Full text
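The three SPD measures this blurb names all have closed forms, so a small sketch may help fix them; the function names are mine, and X, Y are assumed to be SPD NumPy arrays.

```python
# Hedged sketch of the SPD comparison measures named above (SciPy assumed).
import numpy as np
from scipy.linalg import eigh

def airm(X, Y):
    """Affine-invariant Riemannian metric ||log(X^{-1/2} Y X^{-1/2})||_F."""
    lam = eigh(Y, X, eigvals_only=True)   # generalized eigenvalues of (Y, X)
    return np.sqrt(np.sum(np.log(lam) ** 2))

def jeffreys(X, Y):
    """Symmetrized KL divergence between N(0, X) and N(0, Y)."""
    d = X.shape[0]
    return 0.5 * (np.trace(np.linalg.solve(X, Y)) +
                  np.trace(np.linalg.solve(Y, X))) - d

def jbld(X, Y):
    """Jensen-Bregman LogDet (Stein) divergence: log det((X+Y)/2) - 0.5 log det(XY)."""
    _, l_mid = np.linalg.slogdet((X + Y) / 2)
    _, l_x = np.linalg.slogdet(X)
    _, l_y = np.linalg.slogdet(Y)
    return l_mid - 0.5 * (l_x + l_y)
```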

Learning Discriminative Alpha-Beta-divergence for Positive Definite Matrices (Extended Version)

Symmetric positive definite (SPD) matrices are useful for capturing second-order statistics of visual data. To compare two SPD matrices, several measures are available, such as the affine-invariant Riemannian metric, Jeffreys divergence, Jensen-Bregman logdet divergence, etc.; however, their behaviors may be application dependent, raising the need of manual selection to achieve the best possibl...

Full text

Online Linear Regression using Burg Entropy

We consider the problem of online prediction with a linear model. In contrast to existing work in online regression, which regularizes based on squared loss or KL-divergence, we regularize using divergences arising from the Burg entropy. We demonstrate regret bounds for our resulting online gradient-descent algorithm; to our knowledge, these are the first online bounds involving Burg entropy. W...

Full text
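One plausible reading of "regularizing with divergences arising from the Burg entropy" is mirror descent with the mirror map phi(w) = -sum_i log w_i, whose Bregman divergence is the Itakura-Saito divergence. The sketch below shows one such update for squared loss; it is an assumption about the construction, not the paper's stated algorithm, and eta and the positivity clamp are mine.

```python
# Hedged sketch: one mirror-descent step with the Burg mirror map.
import numpy as np

def burg_md_step(w, x, y, eta=0.01):
    """grad phi(w) = -1/w, so the mirror step -1/w' = -1/w - eta*g gives
    w' = w / (1 + eta * g * w), which keeps positive weights positive
    when the step size is small enough."""
    g = 2.0 * (w @ x - y) * x          # gradient of the squared loss (w.x - y)^2
    w_new = w / (1.0 + eta * g * w)    # elementwise Burg-entropy mirror update
    return np.maximum(w_new, 1e-12)    # clamp to stay in the positive orthant
```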

Metric and Kernel Learning Using a Linear Transformation

Metric and kernel learning arise in several machine learning applications. However, most existing metric learning algorithms are limited to learning metrics over low-dimensional data, while existing kernel learning algorithms are often limited to the transductive setting and do not generalize to new data points. In this paper, we study the connections between metric learning and kernel learning...

Full text
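The metric-kernel connection this blurb alludes to can be seen in a few lines: a Mahalanobis matrix A = L^T L is ordinary Euclidean distance after the linear map L, and it induces the linear kernel k(x, z) = x^T A z. The snippet below just verifies this identity numerically; variable names are mine.

```python
# Hedged sketch of the metric <-> kernel correspondence.
import numpy as np

rng = np.random.default_rng(0)
d = 5
L = rng.standard_normal((d, d))
A = L.T @ L                               # a PSD Mahalanobis matrix
x, z = rng.standard_normal(d), rng.standard_normal(d)

d_A = (x - z) @ A @ (x - z)               # Mahalanobis distance under A
d_euc = np.sum((L @ x - L @ z) ** 2)      # Euclidean distance after mapping by L
assert np.isclose(d_A, d_euc)

k = x @ A @ z                             # kernel value induced by A
assert np.isclose(k, (L @ x) @ (L @ z))   # = inner product in the mapped space
```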

Composite Kernel Optimization in Semi-Supervised Metric

Machine-learning solutions to classification, clustering and matching problems critically depend on the adopted metric, which in the past was selected heuristically. In the last decade, it has been demonstrated that an appropriate metric can be learnt from data, resulting in superior performance as compared with traditional metrics. This has recently stimulated a considerable interest in the to...

Full text
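A composite kernel is typically a nonnegative combination of base kernels, K = sum_i beta_i K_i, which remains a valid positive semidefinite kernel. The sketch below builds one from RBF kernels with hand-fixed weights; the cited work would learn the weights beta_i from (semi-)supervised data, and all names here are my own.

```python
# Hedged sketch: a composite kernel as a convex combination of RBF kernels.
import numpy as np

def rbf(X, Z, gamma):
    """RBF kernel matrix between row-stacked samples X (n,d) and Z (m,d)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def composite_kernel(X, Z, gammas=(0.1, 1.0, 10.0), betas=(0.5, 0.3, 0.2)):
    """K = sum_i beta_i * K_i with beta_i >= 0, so K stays PSD."""
    return sum(b * rbf(X, Z, g) for b, g in zip(betas, gammas))
```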


Publication date: 2013